A SERVER HARDENING FRAMEWORK
There have been several attempts at improving the security of servers of all kinds, whether web servers such as Apache Tomcat or server stacks such as WAMP. Checklists have been compiled for different servers from time to time, each listing the steps to follow in order to improve the security of a particular server. The user must therefore have basic knowledge of the server before he can make use of the checklist to secure it. This is the first problem: the user has to be well versed in the basic technicalities of the server configuration before he can secure it for use. Secondly, there is as yet no tool or framework that brings the different types of servers together, so that a single framework can be used to harden multiple servers without any knowledge of their basic configuration. Hence, we propose to automate the server hardening process by creating an open-source framework, written in Python, to which users can add new servers by editing its source code. Such a server hardening framework would let even a person with a layman's understanding secure the server he is using, and harden multiple types of servers as per his requirements. The framework will provide two options: AUDITING and HARDENING. If the user chooses AUDITING, the parameters of the server configuration file are displayed along with their current values, parameters that require hardening are flagged, and the user is asked whether he wants to harden them. If HARDENING is chosen, the server configuration file is replaced by a hardened file and the server is restarted.
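The AUDIT/HARDEN flow described above can be sketched as follows. This is a minimal illustration only: the checklist entries, parameter names, file paths, and restart command are hypothetical assumptions, not taken from the paper or from any real server's defaults.

```python
# Sketch of the AUDIT/HARDEN flow. Checklist values, paths, and the
# restart command are illustrative assumptions.
import shutil
import subprocess

# Hypothetical hardening checklist: parameter -> hardened value
CHECKLIST = {
    "ServerTokens": "Prod",
    "ServerSignature": "Off",
    "TraceEnable": "Off",
}

def parse_config(path):
    """Read 'Key Value' pairs from a config file into a dict."""
    params = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                key, _, value = line.partition(" ")
                params[key] = value.strip()
    return params

def audit(path):
    """Map each checklist parameter to (current value, needs_hardening?)."""
    current = parse_config(path)
    return {
        key: (current.get(key, "<unset>"),
              current.get(key, "<unset>") != hardened)
        for key, hardened in CHECKLIST.items()
    }

def harden(path, hardened_path, restart_cmd=("systemctl", "restart", "apache2")):
    """Back up the config, swap in the hardened file, restart the server."""
    shutil.copy(path, path + ".bak")   # keep a backup of the original
    shutil.copy(hardened_path, path)   # replace with the hardened configuration
    subprocess.run(restart_cmd, check=True)
```

A user would first run `audit()` to see which parameters are flagged, then opt into `harden()`, mirroring the two menu options the framework offers.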
Assessment of Seasonal and Site-Specific Variations in Soil Physical, Chemical and Biological Properties Around Opencast Coal Mines
Coal mining adversely affects soil quality around opencast mines. Therefore, a study was conducted in 2010 and 2011 to assess seasonal and site-specific variations in physical, chemical, and biological properties of soil collected at different distances from mining areas in the Jharia coalfield, India. Throughout the year, the soil in sites near coal mines had a significantly higher bulk density, temperature, electrical conductivity, and sulfate and heavy metal contents and a significantly lower water-holding capacity, porosity, moisture content, pH, and total nitrogen and available phosphorus contents, compared with the soil collected far from the mines.
However, biological properties were site-specific and seasonal. Soil microbial biomass carbon (MBC) and nitrogen (MBN), MBC/MBN, and soil respiration were the highest during the rainy season and the lowest in summer, with the minimum values in the soil near coal mines. A soil quality index revealed a significant effect of heavy metal content on soil biological properties in the coal mining areas.
AxoNN: An asynchronous, message-driven parallel framework for extreme-scale deep learning
In the last few years, the memory requirements to train state-of-the-art
neural networks have far exceeded the DRAM capacities of modern hardware
accelerators. This has necessitated the development of efficient algorithms to
train these neural networks in parallel on large-scale GPU-based clusters.
Since computation is relatively inexpensive on modern GPUs, designing and
implementing extremely efficient communication in these parallel training
algorithms is critical for extracting the maximum performance. This paper
presents AxoNN, a parallel deep learning framework that exploits asynchrony and
message-driven execution to schedule neural network operations on each GPU,
thereby reducing GPU idle time and maximizing hardware efficiency. By using the
CPU memory as a scratch space for offloading data periodically during training,
AxoNN is able to reduce GPU memory consumption by four times. This allows us to
increase the number of parameters per GPU by four times, thus reducing the
amount of communication and increasing performance by over 13%. When tested
against large transformer models with 12-100 billion parameters on 48-384
NVIDIA Tesla V100 GPUs, AxoNN achieves a per-GPU throughput of 49.4-54.78% of
theoretical peak and reduces the training time by 22-37 days (15-25% speedup)
as compared to the state-of-the-art.
Comment: Proceedings of the IEEE International Parallel & Distributed Processing Symposium (IPDPS). IEEE Computer Society, May 202
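The CPU-offloading idea described above can be illustrated with a framework-agnostic toy. This is not AxoNN's implementation (which overlaps asynchronous transfers with computation on real GPUs); it is a synchronous sketch in which "GPU memory" and "CPU scratch space" are plain dictionaries and tensors are byte strings.

```python
# Toy model of CPU offloading: tensors that exceed the device budget are
# evicted to CPU scratch space and paged back in on demand. A hedged
# illustration of the concept, not AxoNN's actual machinery.

class OffloadManager:
    def __init__(self, gpu_capacity):
        self.gpu_capacity = gpu_capacity   # max bytes resident on the "GPU"
        self.gpu = {}                      # name -> tensor (simulated device memory)
        self.cpu = {}                      # name -> tensor (scratch space)

    def _gpu_bytes(self):
        return sum(len(t) for t in self.gpu.values())

    def put(self, name, tensor):
        """Place a tensor on the GPU, evicting older residents to CPU if needed."""
        while self.gpu and self._gpu_bytes() + len(tensor) > self.gpu_capacity:
            victim = next(iter(self.gpu))      # evict the oldest resident tensor
            self.cpu[victim] = self.gpu.pop(victim)
        self.gpu[name] = tensor

    def get(self, name):
        """Fetch a tensor for computation, paging it back from CPU if evicted."""
        if name not in self.gpu:
            self.put(name, self.cpu.pop(name))  # page back in, evicting as needed
        return self.gpu[name]
```

The payoff the abstract reports follows from exactly this trade: a smaller resident working set per GPU frees memory for more parameters, at the cost of host-device transfers that an asynchronous runtime can hide.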
Explosive Remnants of War: A War after the War?
Explosive Remnants of War (ERW) pose significant humanitarian problems to civilians as well as to governments in post-conflict situations. People continue to be at risk even after the war due to the presence of ERW. The issue of ERW has in fact shifted the focus of the international community from the immediate impacts of weapons to their long-term effects. In response, states concluded a landmark agreement in 2003: Protocol V to the UN Convention on Certain Conventional Weapons (CCW). This Protocol aims at providing a proper mechanism to deal with the ERW threat. Meanwhile, with the beginning of the new century and the emergence of newly sophisticated weapons, the debate over ERW shifted to one of the most menacing categories of weapons, cluster munitions. Again responding to the problem, the state parties adopted the Convention on Cluster Munitions in 2008, which bans the use and development of these deadly weapons. Both of these instruments suffer from certain inherent limitations. Despite these limitations, they still serve as the last resort for civilians as well as for the governments of war-torn communities in dealing with the catastrophic effects of ERW.
Entanglement on linked boundaries in Chern-Simons theory with generic gauge groups
We study the entanglement for a state on linked torus boundaries in Chern-Simons theory with a generic gauge group and present the asymptotic bounds of R\'enyi entropy at two different limits: (i) large Chern-Simons coupling $k$, and (ii) large rank $r$ of the gauge group. These results show that the R\'enyi entropies cannot diverge faster than $\ln k$ and $\ln r$, respectively. We focus on torus links $T(2,2n)$ with topological linking number $n$. The R\'enyi entropy for these links shows a periodic structure in $n$ and vanishes whenever $n = 0 \bmod \mathsf{p}$, where the integer $\mathsf{p}$ is a function of the coupling $k$ and rank $r$. We highlight that the refined Chern-Simons link invariants can remove such a periodic structure in $n$.
Comment: 31 pages, 5 figures
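For reference, the R\'enyi entropy whose asymptotics are bounded here is the standard one, computed from the reduced density matrix $\rho$ obtained by tracing out one of the linked torus boundaries (this is the textbook definition, not a formula specific to this paper):

$$
S_m(\rho) \;=\; \frac{1}{1-m}\,\ln \operatorname{Tr}\rho^{\,m}\,, \qquad m > 1\,,
$$

which reduces to the entanglement (von Neumann) entropy $-\operatorname{Tr}\rho\ln\rho$ in the limit $m \to 1$.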
Śāntarakṣita and Kamalaśīla on the Advaita Vedānta Theory of a Self
In this article we assess Śāntarakṣita’s and Kamalaśīla’s critique of the Advaita Vedānta theory of self. We provide a translation of the verses 328-335 of the commentary titled Tattvasaṃgrahapañjikā, which was composed by Kamalaśīla on Śāntarakṣita’s Tattvasaṃgraha. We present Śāntarakṣita’s and Kamalaśīla’s views of a self and also explain the Advaita Vedānta theory based on the texts of Śaṅkara. It is concluded in the article that Śāntarakṣita and Kamalaśīla failed to consider the most likely Advaitin replies to their objections, especially the reply that cognitions of objects are illusory rather than real modifications, since the critique assumed that they were real modifications.
Jorge: Approximate Preconditioning for GPU-efficient Second-order Optimization
Despite their better convergence properties compared to first-order
optimizers, second-order optimizers for deep learning have been less popular
due to their significant computational costs. The primary efficiency bottleneck
in such optimizers is matrix inverse calculations in the preconditioning step,
which are expensive to compute on GPUs. In this paper, we introduce Jorge, a
second-order optimizer that promises the best of both worlds -- rapid
convergence benefits of second-order methods, and high computational efficiency
typical of first-order methods. We address the primary computational bottleneck
of computing matrix inverses by completely eliminating them using an
approximation of the preconditioner computation. This makes Jorge extremely
efficient on GPUs in terms of wall-clock time. Further, we describe an approach
to determine Jorge's hyperparameters directly from a well-tuned SGD baseline,
thereby significantly minimizing tuning efforts. Our empirical evaluations
demonstrate the distinct advantages of using Jorge, outperforming
state-of-the-art optimizers such as SGD, AdamW, and Shampoo across multiple
deep learning models, both in terms of sample efficiency and wall-clock time
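The bottleneck Jorge attacks, explicit matrix inversion in the preconditioner, can be avoided with inverse-free iterations built purely from multiplications. As an illustration of that general idea (not Jorge's actual update rule, which the abstract does not spell out), the Newton-Schulz recurrence $X_{k+1} = X_k(2I - AX_k)$ converges to $A^{-1}$ using only matrix products; the scalar version below computes $1/a$ with no division at all.

```python
# Scalar Newton-Schulz iteration: x <- x * (2 - a*x) converges to 1/a
# quadratically whenever |1 - a*x0| < 1. The matrix analogue replaces
# explicit inversion with GPU-friendly multiplications, which is the
# kind of trick Jorge-style optimizers exploit (illustration only).

def newton_inverse(a, x0, steps=10):
    """Approximate 1/a without division; requires |1 - a*x0| < 1."""
    x = x0
    for _ in range(steps):
        x = x * (2.0 - a * x)
    return x
```

Because each step squares the residual $|1 - ax|$, a handful of iterations suffices, and on a GPU the matrix form costs only well-parallelized matmuls instead of an expensive inverse.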